
    Operation of an ADR Using Helium Exchange Gas as a Substitute for a Failed Heat Switch

    The Soft X-ray Spectrometer (SXS) is one of four instruments on the Japanese Astro-H mission, which is currently planned for launch in late 2015. The SXS will perform imaging spectroscopy in the soft X-ray band (0.3-12 keV) using a 6 × 6 pixel array of microcalorimeters cooled to 50 mK. The detectors are cooled by a 3-stage adiabatic demagnetization refrigerator (ADR) that rejects heat to either a superfluid helium tank (at 1.2 K) or to a 4.5 K Joule-Thomson (JT) cryocooler. Four gas-gap heat switches are used in the assembly to manage heat flow between the ADR stages and the heat sinks. The engineering model (EM) ADR was assembled and performance tested at NASA/GSFC in November 2011, and subsequently installed in the EM dewar at Sumitomo Heavy Industries, Japan. During the first cooldown in July 2012, a failure of the heat switch that linked the two colder stages of the ADR to the helium tank was observed. Operation of the ADR requires some mechanism for thermally linking the salt pills to the heat sink and then thermally isolating them. With the failed heat switch unable to perform this function, an alternate plan was devised which used carefully controlled amounts of exchange gas in the dewar's guard vacuum to facilitate heat exchange. The process was successfully demonstrated in November 2012, allowing the ADR to cool the detectors to 50 mK for hold times in excess of 10 h. This paper describes the exchange-gas-assisted recycling process and the strategies used to avoid helium contamination of the detectors at low temperature.
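
    The recycle sequence the abstract describes (thermally link, magnetize, isolate, demagnetize) can be summarized schematically. The sketch below is not flight software; every class and method name is a hypothetical placeholder standing in for the real dewar and ADR controls.

        # Schematic of the exchange-gas-assisted recycle described above;
        # all names are hypothetical placeholders, not the SXS flight code.
        class Dewar:
            def admit_exchange_gas(self):
                print("exchange gas admitted: salt pills linked to He tank")
            def pump_out_exchange_gas(self):
                print("guard vacuum restored: ADR stages isolated")

        class ADR:
            def magnetize(self):
                print("magnetizing: heat of magnetization rejected to sink")
            def demagnetize(self):
                print("demagnetizing: detectors cooled toward 50 mK")

        def recycle(dewar, adr):
            dewar.admit_exchange_gas()     # substitute for the failed gas-gap heat switch
            adr.magnetize()                # reject heat while thermally linked
            dewar.pump_out_exchange_gas()  # isolate before cooling
            adr.demagnetize()              # hold times in excess of 10 h were demonstrated

        recycle(Dewar(), ADR())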

    Enabling multi-level relevance feedback on PubMed by integrating rank learning into DBMS

    Background: Finding relevant articles from PubMed is challenging because it is hard to express the user's specific intention in the given query interface, and a keyword query typically retrieves a large number of results. Researchers have applied machine learning techniques to find relevant articles by ranking the articles according to the learned relevance function. However, the process of learning and ranking is usually done offline without being integrated with the keyword queries, and the users have to provide a large number of training documents to reach a reasonable learning accuracy. This paper proposes a novel multi-level relevance feedback system for PubMed, called RefMed, which supports both ad-hoc keyword queries and multi-level relevance feedback in real time on PubMed. Results: RefMed supports multi-level relevance feedback by using the RankSVM as the learning method, and thus it achieves higher accuracy with less feedback. RefMed "tightly" integrates the RankSVM into the RDBMS to support both keyword queries and multi-level relevance feedback in real time; the tight coupling of the RankSVM and the DBMS substantially improves the processing time. An efficient parameter selection method for the RankSVM is also proposed, which tunes the RankSVM parameter without performing validation. Thereby, RefMed achieves a high learning accuracy in real time without performing a validation process. RefMed is accessible at http://dm.postech.ac.kr/refmed. Conclusions: RefMed is the first multi-level relevance feedback system for PubMed, and it achieves a high accuracy with less feedback. It effectively learns an accurate relevance function from the user's feedback and efficiently processes the function to return relevant articles in real time.
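
    RefMed's learning step is RankSVM, which the abstract describes only at a high level. The following is a minimal sketch of the pairwise transform behind RankSVM, not RefMed's in-DBMS implementation; it assumes scikit-learn and hypothetical document feature vectors with multi-level feedback labels.

        # Minimal sketch of the pairwise idea behind RankSVM; the feature
        # vectors and feedback levels are hypothetical.
        import numpy as np
        from sklearn.svm import LinearSVC

        def pairwise_transform(X, levels):
            """Turn multi-level feedback into signed pairwise difference vectors."""
            diffs, signs = [], []
            for i in range(len(X)):
                for j in range(len(X)):
                    if levels[i] > levels[j]:   # document i should outrank document j
                        diffs.append(X[i] - X[j])
                        signs.append(1)
                        diffs.append(X[j] - X[i])  # balanced negative pair
                        signs.append(-1)
            return np.array(diffs), np.array(signs)

        X = np.random.rand(5, 3)            # 5 documents, 3 features (hypothetical)
        levels = np.array([2, 0, 1, 2, 0])  # multi-level relevance feedback

        Xp, yp = pairwise_transform(X, levels)
        model = LinearSVC(C=1.0, fit_intercept=False).fit(Xp, yp)  # linear RankSVM
        scores = X @ model.coef_.ravel()    # score documents with the learned function
        print(np.argsort(-scores))          # indices in best-first rank order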

    Dynamic summarization of bibliographic-based data

    Background: Traditional information retrieval techniques typically return excessive output when directed at large bibliographic databases. Natural language processing applications strive to extract salient content from the excessive data. Semantic MEDLINE, a National Library of Medicine (NLM) natural language processing application, highlights relevant information in PubMed data. However, Semantic MEDLINE implements manually coded schemas, accommodating few information needs. Currently, there are only five such schemas, while many more would be needed to realistically accommodate all potential users. The aim of this project was to develop and evaluate a statistical algorithm that automatically identifies relevant bibliographic data; the new algorithm could be incorporated into a dynamic schema to accommodate various information needs in Semantic MEDLINE, eliminating the need for multiple schemas. Methods: We developed a flexible algorithm named Combo that combines three statistical metrics, the Kullback-Leibler divergence (KLD), Riloff's RlogF metric (RlogF), and a new metric called PredScal, to automatically identify salient data in bibliographic text. We downloaded citations from a PubMed search query addressing the genetic etiology of bladder cancer. The citations were processed with SemRep, an NLM rule-based application that produces semantic predications. SemRep output was processed by Combo, in addition to the standard Semantic MEDLINE genetics schema and, independently, by the two individual KLD and RlogF metrics. We evaluated each summarization method using an existing reference standard within the task-based context of genetic database curation. Results: Combo asserted 74 genetic entities implicated in bladder cancer development, whereas the traditional schema asserted 10 genetic entities; the KLD and RlogF metrics individually asserted 77 and 69 genetic entities, respectively. Combo achieved 61% recall and 81% precision, with an F-score of 0.69. The traditional schema achieved 23% recall and 100% precision, with an F-score of 0.37. The KLD metric achieved 61% recall and 70% precision, with an F-score of 0.65. The RlogF metric achieved 61% recall and 72% precision, with an F-score of 0.66. Conclusions: Semantic MEDLINE summarization using the new Combo algorithm outperformed a conventional summarization schema in a genetic database curation task. It could potentially streamline information acquisition for other needs without the need to hand-build multiple saliency schemas.
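
    Two of Combo's three metrics have standard published definitions; PredScal is introduced in the paper itself and is not reproduced here. A minimal sketch of KLD and RlogF, assuming hypothetical term distributions and counts:

        # Sketch of the two published metrics Combo combines; PredScal's
        # definition is given only in the paper, so it is omitted.
        import math

        def kld(p, q):
            """Kullback-Leibler divergence D(P||Q) over a shared vocabulary."""
            return sum(p[w] * math.log(p[w] / q[w])
                       for w in p if p[w] > 0 and q.get(w, 0) > 0)

        def rlogf(freq_relevant, freq_total):
            """Riloff's RlogF: rewards terms both frequent in and specific to the relevant set."""
            if freq_relevant == 0:
                return 0.0
            r = freq_relevant / freq_total  # conditional probability of relevance
            return r * math.log2(freq_relevant)

        # Hypothetical term distributions (retrieved set vs. background).
        p = {"EGFR": 0.4, "TP53": 0.6}
        q = {"EGFR": 0.1, "TP53": 0.2, "BRCA1": 0.7}
        print(kld(p, q), rlogf(12, 15))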

    Automation of a problem list using natural language processing

    BACKGROUND: The medical problem list is an important part of the electronic medical record under development at our institution. To serve the functions it is designed for, the problem list has to be as accurate and timely as possible. However, the current problem list is usually incomplete and inaccurate, and is often totally unused. To alleviate this issue, we are building an environment where the problem list can be easily and effectively maintained. METHODS: For this project, 80 medical problems were selected for their frequency of use in our future clinical field of evaluation (cardiovascular). We have developed an Automated Problem List system composed of two main components: a background and a foreground application. The background application uses Natural Language Processing (NLP) to harvest potential problem list entries from the list of 80 targeted problems detected in the multiple free-text electronic documents available in our electronic medical record. These proposed medical problems drive the foreground application designed for management of the problem list. Within this application, the extracted problems are proposed to the physicians for addition to the official problem list. RESULTS: The set of 80 targeted medical problems selected for this project covered about 5% of all possible diagnoses coded in ICD-9-CM in our study population (cardiovascular adult inpatients), but about 64% of all instances of these coded diagnoses. The system contains algorithms to first detect document sections, then sentences within those sections, and finally potential problems within the sentences. The initial evaluation of the section and sentence detection algorithms demonstrated a sensitivity and positive predictive value of 100% when detecting sections, and a sensitivity of 89% and a positive predictive value of 94% when detecting sentences. CONCLUSION: The global aim of our project is to automate the process of creating and maintaining a problem list for hospitalized patients and thereby help to guarantee the timeliness, accuracy and completeness of this information.
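
    The cascade the abstract describes (sections, then sentences, then target problems) can be illustrated with a toy pipeline. The section headers, sentence splitter, and three-problem dictionary below are illustrative assumptions, not the system's actual rules.

        # Toy version of the three-stage cascade: sections -> sentences -> problems.
        # Headers, splitter, and the problem dictionary are illustrative assumptions.
        import re

        SECTION_HEADER = re.compile(r"^(HISTORY|ASSESSMENT|IMPRESSION|PLAN)\b", re.M)
        TARGET_PROBLEMS = {"atrial fibrillation", "heart failure", "hypertension"}

        def split_sections(document):
            """Split a free-text note at recognized section headers."""
            starts = [m.start() for m in SECTION_HEADER.finditer(document)]
            return [document[a:b] for a, b in zip(starts, starts[1:] + [len(document)])]

        def find_problems(section):
            """Naive sentence split, then dictionary lookup for target problems."""
            hits = []
            for sentence in re.split(r"(?<=[.;])\s+", section):
                lowered = sentence.lower()
                hits += [p for p in TARGET_PROBLEMS if p in lowered]
            return hits

        note = "IMPRESSION: New atrial fibrillation.\nPLAN: Manage heart failure."
        for section in split_sections(note):
            print(find_problems(section))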

    Linking genes to literature: text mining, information extraction, and retrieval applications for biology

    Efficient access to information contained in online scientific literature collections is essential for life science research, playing a crucial role from the initial stage of experiment planning to the final interpretation and communication of the results. The biological literature also constitutes the main information source for manual literature curation used by expert-curated databases. Following the increasing popularity of web-based applications for analyzing biological data, new text-mining and information extraction strategies are being implemented. These systems exploit existing regularities in natural language to extract biologically relevant information from electronic texts automatically. The aim of the BioCreative challenge is to promote the development of such tools and to provide insight into their performance. This review presents a general introduction to the main characteristics and applications of currently available text-mining systems for life sciences in terms of the following: the type of biological information demands being addressed; the level of information granularity of both user queries and results; and the features and methods commonly exploited by these applications. The current trend in biomedical text mining points toward an increasing diversification in terms of application types and techniques, together with integration of domain-specific resources such as ontologies. Additional descriptions of some of the systems discussed here are available on the internet.

    Hitomi (ASTRO-H) X-ray Astronomy Satellite

    The Hitomi (ASTRO-H) mission is the sixth Japanese X-ray astronomy satellite, developed by a large international collaboration including Japan, the USA, Canada, and Europe. The mission aimed to provide the highest energy resolution ever achieved at E > 2 keV, using a microcalorimeter instrument, and to cover a wide energy range spanning four decades in energy from soft X-rays to gamma rays. After a successful launch on February 17, 2016, the spacecraft ceased functioning on March 26, 2016, but the roughly month-long commissioning phase provided valuable information on the onboard instruments and the spacecraft system, including astrophysical results obtained from first-light observations. This paper describes the Hitomi (ASTRO-H) mission, its capabilities, the initial operation, and the instrument and spacecraft performance confirmed during the commissioning operations.

    Semantic Processing to Enhance Retrieval of Diagnosis Citations from Medline

    We investigate the use of natural language processing (NLP) to enhance precision when retrieving Medline citations on diagnostic procedures. Inter-annotator agreement is described as part of the evaluation method, and preliminary results are presented.
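
    The abstract mentions inter-annotator agreement as part of the evaluation. A common chance-corrected measure is Cohen's kappa; the sketch below is a generic computation on hypothetical binary relevance judgments, not the paper's reported figures.

        # Cohen's kappa for two annotators' binary relevance judgments;
        # the input judgments here are hypothetical.
        def cohens_kappa(a, b):
            """Chance-corrected agreement between two annotators."""
            assert len(a) == len(b)
            n = len(a)
            observed = sum(x == y for x, y in zip(a, b)) / n
            p_a1, p_b1 = sum(a) / n, sum(b) / n
            expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
            return (observed - expected) / (1 - expected)

        # Hypothetical judgments on 8 citations (1 = relevant to diagnosis).
        print(cohens_kappa([1, 1, 0, 1, 0, 0, 1, 0],
                           [1, 0, 0, 1, 0, 1, 1, 0]))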